Facebook took action against 16.2 million pieces of content in India: Meta
HIGHLIGHTS
In October, Facebook took action against approximately 18.8 million pieces of content.
Hate speech and spam were among the types of content that were taken down.
In October, Instagram took proactive action against nearly 3 million items.
Why in news
During the month of November, over 16.2 million pieces of content were proactively 'actioned' on Facebook across 13 violation categories in India, according to social media company Meta. Statistics released in a compliance report show that the photo-sharing platform Instagram took proactive action against over 3.2 million pieces across 12 categories over the same period. Large digital platforms (those with more than 5 million users) are required to publish monthly compliance reports under the new IT rules that came into effect earlier this year, including details of complaints received and actions taken.
The report also provides information on content that was removed or blocked as a result of proactive monitoring using automated tools. In October, Facebook had 'actioned' over 18.8 million pieces of content across 13 categories, while Instagram had 'actioned' over 3 million pieces across 12 categories during the same period. According to Meta's latest report, Facebook received 519 user reports through its Indian grievance mechanism between November 1 and November 30. 'Of these incoming reports, we provided tools for users to resolve their issues in 461 cases,' the report stated. These include pre-established channels for reporting content for specific violations, self-remediation flows where users can retrieve their data, and avenues for addressing hacked-account issues, among others.
Instagram received 424 complaints through the Indian grievance mechanism between November 1 and November 30. Facebook's parent company recently renamed itself Meta; the Meta umbrella covers Facebook, WhatsApp, Instagram, Messenger, and Oculus. According to the latest data, the over 16.2 million content pieces Facebook actioned in November included spam (11 million), violent and graphic content (2 million), adult nudity and sexual activity (1.5 million), and hate speech (100,100). Other categories where content was taken down include bullying and harassment (102,700), suicide and self-injury (370,500), dangerous organisations and individuals: terrorist propaganda (71,700), and dangerous organisations and individuals: organised hate (12,400). In addition, 163,200 content pieces were actioned in the Child Endangerment - Nudity and Physical Abuse category, 700,300 in the Child Endangerment - Sexual Exploitation category, and 190,500 in the Violence and Incitement category. 'Actioned' content refers to the number of pieces of content (such as posts, photos, videos, or comments) against which action was taken for violating standards.
Taking action might involve removing a piece of content from Facebook or Instagram, or applying a warning to photos or videos that may be disturbing to some audiences. In the majority of these cases, the proactive rate, which reflects the percentage of all content or accounts acted on that Facebook found and flagged using technology before users reported them, ranged from 60.5 to 99.9 per cent. The proactive rate for removing content related to bullying and harassment was 40.7 per cent, because this content is contextual and highly personal; in many cases, users need to report this behaviour to Facebook before it can be identified and removed. During November 2021, Instagram actioned approximately 3.2 million pieces of content across 12 categories, including suicide and self-injury material (815,800), violent and graphic content (333,400), adult nudity and sexual activity (466,200), and bullying and harassment content (285,900).
Other categories where content was taken down include hate speech (24,900), dangerous organisations and individuals: terrorist propaganda (8,400), dangerous organisations and individuals: organised hate (1,400), Child Endangerment - Nudity and Physical Abuse (41,100), and Violence and Incitement (27,500). In November, 1.2 million pieces of content in the Child Endangerment - Sexual Exploitation category were proactively actioned.